Results for 'Ai Leen Choo'

1000+ found
  1. The Epistemic Significance of Religious Disagreements: Cases of Unconfirmed Superiority Disagreements.Frederick Choo - 2021 - Topoi 40 (5):1139-1147.
    Religious disagreements are widespread. Some philosophers have argued that religious disagreements call for religious skepticism, or a revision of one’s religious beliefs. In order to figure out the epistemic significance of religious disagreements, two questions need to be answered. First, what kind of disagreements are religious disagreements? Second, how should one respond to such disagreements? In this paper, I argue that many religious disagreements are cases of unconfirmed superiority disagreements, where parties have good reason to think they are not epistemic (...)
    13 citations
  2. Scorekeeping in Debates between Non-Naturalism and Its Opponents: On Parfit's Last Statement in Metaethics.Dong-Ryul Choo - 2020 - 철학적 분석 (Philosophical Analysis) 44:1-29.
    [English abstract] In his last metaethical statement, Parfit revisits his earlier arguments for non-metaphysical normative non-naturalism, and points to the possibility of convergence between his view and Railton's non-analytical normative naturalism. I examine the basis of this convergence claim and find it unpersuasive, mainly because if their views converge on the same position, Parfit's non-natural norms exist only as predicates. In order to avoid this consequence, he needs to present a reason for believing in the existence of normative properties (...)
    16 citations
  3. Can a Worship-worthy Agent Command Others to Worship It?Frederick Choo - 2022 - Religious Studies 58 (1):79-95.
    This article examines two arguments that a worship-worthy agent cannot command worship. The first argument is based on the idea that any agent who commands worship is egotistical, and hence not worship-worthy. The second argument is based on Campbell Brown and Yujin Nagasawa's (2005) idea that people cannot comply with the command to worship because if people are offering genuine worship, they cannot be motivated by a command to do so. One might then argue that a worship-worthy agent would have (...)
    3 citations
  4. The Free Will Defense Revisited: The Instrumental Value of Significant Free Will.Frederick Choo & Esther Goh - 2019 - International Journal of Theology, Philosophy and Science 4:32-45.
    Alvin Plantinga has famously responded to the logical problem of evil by appealing to the intrinsic value of significant free will. A problem, however, arises because traditional theists believe that both God and the redeemed who go to heaven cannot do wrong acts. This entails that both God and the redeemed in heaven lack significant freedom. If significant freedom is indeed valuable, then God and the redeemed in heaven would lack something intrinsically valuable. However, if significant freedom is not intrinsically (...)
    6 citations
  5. Telling Others to Do What You Believe Is Morally Wrong: The Case of Confucius and Zai Wo.Frederick Choo - 2019 - Asian Philosophy 29 (2):106-115.
    Can it ever be morally justifiable to tell others to do what we ourselves believe is morally wrong to do? The common sense answer is no. It seems that we should never tell others to do something if we think it is morally wrong to do that act. My first goal is to argue that in Analects 17.21, Confucius tells his disciple not to observe a ritual even though Confucius himself believes that it is morally wrong that one does not (...)
    3 citations
  6. EQUALITY, COMMUNITY, AND THE SCOPE OF DISTRIBUTIVE JUSTICE: A PARTIAL DEFENSE OF COHEN's VISION.Dong-Ryul Choo - 2014 - Socialist Studies 10 (1):152-173.
    Luck egalitarians equalize the outcome enjoyed by people who exemplify the same degree of distributive desert by removing the influence of luck. They also try to calibrate differential rewards according to the pattern of distributive desert. This entails that they have to decide upon, among other things, the rate of reward, i.e., a principled way of distributing rewards to groups exercising different degrees of the relevant desert. However, the problem of the choice of reward principle is a relatively and undeservedly (...)
  7. Addressing two recent challenges to the factive account of knowledge.Esther Goh & Frederick Choo - 2022 - Synthese 200 (435):1-14.
    It is widely thought that knowledge is factive – only truths can be known. However, this view has been recently challenged. One challenge appeals to approximate truths. Wesley Buckwalter and John Turri argue that false-but-approximately-true propositions can be known. They provide experimental findings to show that their view enjoys intuitive support. In addition, they argue that we should reject the factive account of knowledge to avoid widespread skepticism. A second challenge, advanced by Nenad Popovic, appeals to multidimensional geometry to build (...)
    1 citation
  8. "Do I Have to Be Tested?": Understanding Reluctance to Be Screened for COVID-19.Aron Egelko, Leen Arnaout, Joshua Garoon, Carl Streed & Zackary Berger - 2020 - American Journal of Public Health 110 (12).
  9. Unravelling the Methodology of Causal Pluralism.Anton Froeyman & Leen De Vreese - 2008 - Philosophica 81 (1).
    In this paper we try to bring some clarification in the recent debate on causal pluralism. Our first aim is to clarify what it means to have a pluralistic theory of causation and to articulate the criteria by means of which a certain theory of causation can or cannot qualify as a pluralistic theory of causation. We also show that there is currently no theory on the market which meets these criteria, and therefore no full-blown pluralist theory of causation exists. Because (...)
  10. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  11. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85--87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  12. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  13. Uma história da educação química brasileira: sobre seu início discutível apenas a partir dos conquistadores.Ai Chassot - 1996 - Episteme 1 (2):129-145.
  14. Đề cương học phần Văn hóa kinh doanh.Đại học Thuongmai - 2012 - Thuongmai University Portal.
    COURSE SYLLABUS: BUSINESS CULTURE (VĂN HÓA KINH DOANH). 1. Course title: VĂN HÓA KINH DOANH (BUSINESS CULTURE). 2. Course code: BMGM1221. 3. Credits: 2 (24,6) (to complete this course, students must spend at least 60 hours on individual preparation).
  15. Đổi mới chế độ sở hữu trong nền kinh tế thị trường định hướng xã hội chủ nghĩa ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Khoa Học Xã Hội Việt Nam 7:3-13.
    At present, the ownership regime in Vietnam has undergone fundamental reforms, yet it still differs greatly from the ownership regimes of modern market economies. Within the structure of Vietnam's ownership regime, the share of state ownership remains too large, and the state economy retains the leading role… It is precisely these differences that have made the market economy (...)
  16. Tiếp tục đổi mới, hoàn thiện chế độ sở hữu trong nền kinh tế thị trường định hướng XHCN ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Mặt Trận 2021 (8):1-7.
    (Mặt trận) - The ownership regime in Vietnam's socialist-oriented market economy must first of all conform to the principles of a modern market economy. Among those principles, private ownership as the foundation of the market economy is an essential one. If we depart from this principle, then however hard we try to build (...)
  17. Đổi mới chế độ sở hữu trong nền kinh tế thị trường định hướng xã hội chủ nghĩa ở Việt Nam.Võ Đại Lược - 2021 - Khoa Học Xã Hội Việt Nam 2021 (7):3-13.
    At present, the ownership regime in Vietnam has undergone fundamental reforms, yet it still differs greatly from the ownership regimes of modern market economies. Within the structure of Vietnam's ownership regime, the share of state ownership remains too large, and the state economy retains the leading role… It is precisely these differences that have made the market economy (...)
  18. The Blood Ontology: An ontology in the domain of hematology.Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith (eds.), Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR 833). pp. (CEUR Workshop Proceedings, 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  19. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    16 citations
  20. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
    4 citations
  21. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries into (...)
  22. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    23 citations
  23. AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI.Jose Hernandez-Orallo & Karina Vold - 2019 - In Jose Hernandez-Orallo & Karina Vold (eds.), Proceedings of the AAAI/ACM. pp. 507-513.
    Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To (...)
    8 citations
  24. Interpreting AI-Generated Art: Arthur Danto’s Perspective on Intention, Authorship, and Creative Traditions in the Age of Artificial Intelligence.Raquel Cascales - 2023 - Polish Journal of Aesthetics 71 (4):17-29.
    Arthur C. Danto did not live to witness the proliferation of AI in artistic creation. However, his philosophy of art offers key ideas about art that can provide an interesting perspective on artwork generated by artificial intelligence (AI). In this article, I analyze how his ideas about contemporary art, intention, interpretation, and authorship could be applied to the ongoing debate about AI and artistic creation. At the same time, it is also interesting to consider whether the incorporation of AI into (...)
  25. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
    1 citation
  26. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    3 citations
  27. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  28. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - manuscript
    Under what conditions would an artificially intelligent system have wellbeing? Despite its obvious bearing on the ethics of human interactions with artificial systems, this question has received little attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial (...)
    1 citation
  29. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  30. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
  31. Medical AI: is trust really the issue?Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections (...)
    1 citation
  32. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness (...)
    3 citations
  33. AI and its new winter: from myths to realities.Luciano Floridi - 2020 - Philosophy and Technology 33 (1):1-3.
    An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turn characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as of (...)
    13 citations
  34. Does AI Make It Impossible to Write an 'Original' Sentence (Is it Fair to Mechanically Test Originality).William M. Goodman - 2023 - The Toronto Star 2023 (September 28):A19.
    As a retired professor, I join in the growing concerns among educators, and others, about plagiarism, especially now that AI tools like ChatGPT are so readily available. However, I feel more caution is needed, regarding temptations to rely on supposed automatic detection tools, like Turnitin, to solve the problems. Students can be unfairly accused if such tools are used unreflectingly. The Toronto Star's online version of this published Op Ed is available at the link shown below. The version attached here (...)
  35. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  36. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  37. RESPONSIBLE AI: INTRODUCTION OF “NOMADIC AI PRINCIPLES” FOR CENTRAL ASIA.Ammar Younas - 2020 - Conference Proceeding of International Conference Organized by Jizzakh Polytechnical Institute Uzbekistan.
    We think that Central Asia should come up with its own AI Ethics Principles which we propose to name as “Nomadic AI Principles”.
    1 citation
  38. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    2 citations
  39. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    3 citations
  40. Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    26 citations
  41. How AI can be a force for good.Mariarosaria Taddeo & Luciano Floridi - 2018 - Science Magazine 361 (6404):751-752.
    This article argues that an ethical framework will help to harness the potential of AI while keeping humans in control.
    76 citations
  42. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  43. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  44. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    44 citations
  45. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  46. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  47. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    27 citations
  48. Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland:
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    1 citation
  49. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  50. The AI Ensoulment Hypothesis.Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” a (...)
    2 citations
1 — 50 / 1000